Depth estimation is usually ill-posed and ambiguous for monocular camera-based 3D multi-person pose estimation. Since LiDAR can capture accurate depth information in long-range scenes, it can benefit both the global localization of individuals and the 3D pose estimation by providing rich geometry features. Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to lighting conditions. Specifically, we design an effective fusion strategy to take advantage of multi-modal input data, including images and point clouds, and make full use of temporal information to guide the network to learn natural and coherent human motions. Without relying on any 3D pose annotations, our method exploits the inherent geometric constraints of point clouds for self-supervision and uses 2D keypoints on images for weak supervision. Extensive experiments on public datasets and our newly collected dataset demonstrate the superiority and generalization capability of our proposed method.
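A minimal sketch of how LiDAR depth can complement 2D keypoints in a setting like the one above. This is not the paper's fusion module; the intrinsics K, the LiDAR-to-camera extrinsics T_cam_lidar, and the pixel search radius are assumptions made for illustration.

```python
import numpy as np

def keypoint_depths(points_lidar, keypoints_2d, K, T_cam_lidar, radius=4.0):
    """Approximate depth for each 2D keypoint from LiDAR points.

    points_lidar: (N, 3) points in the LiDAR frame.
    keypoints_2d: (J, 2) pixel coordinates of detected joints.
    K:            (3, 3) camera intrinsics.
    T_cam_lidar:  (4, 4) rigid transform from LiDAR to camera frame.
    """
    # Transform points into the camera frame and keep those in front of it.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    depths = np.full(len(keypoints_2d), np.nan)
    for j, kp in enumerate(keypoints_2d):
        dist = np.linalg.norm(uv - kp, axis=1)
        near = dist < radius
        if near.any():
            # Take the smallest z among nearby points to crudely handle occlusion.
            depths[j] = pts_cam[near, 2].min()
    return depths
```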
In this work, a machine learning approach is developed for predicting the outcomes of football matches. The novelty of this research lies in the use of the Kelly Index to first classify matches into categories, each denoting a different level of predictive difficulty. Classification models using a wide suite of algorithms were developed for each category of matches in order to determine the efficacy of the approach. In conjunction with this, a set of previously unexplored features was engineered, including Elo-based variables. The dataset originated from Premier League match data covering the 2019-2021 seasons. The findings indicate that decomposing the predictive problem into sub-tasks was effective and produced results competitive with prior works, with ensemble-based methods being the most effective. The paper also devised an investment strategy and evaluated its effectiveness by benchmarking against bookmaker odds. The strategy minimises risk by combining the Kelly Index with predefined confidence thresholds of the predictive models. The experiments found that the proposed strategy can return a profit when following a conservative approach that focuses primarily on easy-to-predict matches where the predictive models display a high confidence level.
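A hedged sketch of the betting logic described above: stake only when the model's confidence clears a preset threshold, sizing stakes with the Kelly fraction. The threshold value and bankroll handling are illustrative assumptions, not the paper's exact strategy.

```python
def kelly_fraction(p, decimal_odds):
    """Kelly stake fraction for win probability p at the given decimal odds."""
    b = decimal_odds - 1.0
    if b <= 0:
        return 0.0  # no positive payout, never bet
    return max(0.0, (p * decimal_odds - 1.0) / b)

def conservative_stakes(predictions, confidence_threshold=0.75, bankroll=100.0):
    """predictions: list of (model_probability, decimal_odds) per match."""
    stakes = []
    for p, odds in predictions:
        if p >= confidence_threshold:  # only easy, high-confidence matches
            stakes.append(bankroll * kelly_fraction(p, odds))
        else:
            stakes.append(0.0)         # abstain otherwise
    return stakes
```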
Motivated by the intuition that the key step in localizing a 2D image within the corresponding 3D point cloud is establishing 2D-3D correspondences between them, we propose the first feature-based dense correspondence framework for the image-to-point cloud registration problem, named CorrI2P, which consists of three modules: feature embedding, symmetric overlapping region detection, and pose estimation through the established correspondences. Specifically, given a pair of a 2D image and a 3D point cloud, we first transform them into high-dimensional feature spaces and feed the resulting features into a symmetric overlapping region detector to determine the regions where the image and point cloud overlap each other. We then use the features of the overlapping regions to establish 2D-3D correspondences before running EPnP within RANSAC to estimate the camera pose. Experimental results on the KITTI and nuScenes datasets show that our CorrI2P outperforms state-of-the-art image-to-point cloud registration methods. We will make our code publicly available.
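A minimal sketch of the final pose-estimation step described above: given 2D-3D correspondences (here assumed to come from the learned overlap features), recover the camera pose with EPnP inside RANSAC. This uses OpenCV's generic solver, not the authors' released code; the reprojection threshold is an assumption.

```python
import cv2
import numpy as np

def pose_from_correspondences(pts_3d, pts_2d, K):
    """pts_3d: (N, 3) points; pts_2d: (N, 2) pixels; K: (3, 3) intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d.astype(np.float64), pts_2d.astype(np.float64),
        K.astype(np.float64), None,
        flags=cv2.SOLVEPNP_EPNP, reprojectionError=4.0)
    if not ok:
        raise RuntimeError("RANSAC failed to find a pose")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers
```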
For semantic-guided cross-view image translation, it is crucial to learn where to sample pixels from the source-view image and where to reallocate them under the guidance of the target-view semantic map, especially when there is little overlap or a drastic view difference between the source and target images. Hence, one not only needs to encode the long-range dependencies among pixels in the source-view image and the target-view semantic map, but also needs to translate these learned dependencies. To this end, we propose a novel generative adversarial network, PI-Trans, which mainly consists of a novel Parallel-ConvMLP module and an Implicit Transformation module at multiple semantic levels. Extensive experimental results show that the proposed PI-Trans achieves the best qualitative and quantitative performance by a large margin compared with state-of-the-art methods on two challenging datasets. The code will be made available at https://github.com/amazingren/pi-trans.
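A speculative PyTorch sketch of a block combining a convolutional path and an MLP path in parallel, in the spirit of the Parallel-ConvMLP module named above; the abstract does not give the module's design, so the channel sizes, normalization, and fusion-by-sum are assumptions.

```python
import torch
import torch.nn as nn

class ParallelConvMLP(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Convolutional branch: local spatial mixing (depthwise + pointwise).
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.GELU(),
            nn.Conv2d(channels, channels, 1))
        # MLP branch: per-position channel mixing via 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels * 2, 1),
            nn.GELU(),
            nn.Conv2d(channels * 2, channels, 1))
        self.norm = nn.GroupNorm(1, channels)

    def forward(self, x):
        h = self.norm(x)
        return x + self.conv(h) + self.mlp(h)  # fuse the two parallel paths
```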
To promote better performance-bandwidth trade-offs for multi-agent perception, we propose a novel distilled collaboration graph (DiscoGraph) to model trainable, pose-aware, and adaptive collaboration among agents. Our key novelties lie in two aspects. First, we propose a teacher-student framework to train DiscoGraph via knowledge distillation: the teacher model employs early collaboration with holistic-view inputs, while the student model is based on intermediate collaboration with single-view inputs. Our framework trains DiscoGraph by constraining the post-collaboration feature maps in the student model to match the corresponding ones in the teacher model. Second, we propose matrix-valued edge weights. Each element in such a matrix reflects the inter-agent attention at a specific spatial region, allowing an agent to adaptively highlight informative regions. During inference, we only need to use the student model, named the distilled collaboration network (DiscoNet). Attributed to the teacher-student framework, multiple agents with a shared DiscoNet can collaboratively approach the performance of a hypothetical teacher model with a holistic view. We validate our method on V2X-Sim 1.0, a large-scale multi-agent perception dataset that we synthesized using CARLA and SUMO co-simulation. Our quantitative and qualitative experiments on multi-agent 3D object detection show that DiscoNet not only achieves a better performance-bandwidth trade-off than state-of-the-art collaborative perception methods, but also brings a more straightforward design rationale. Our code is available at https://github.com/ai4ce/disconet.
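A hedged sketch of the two ideas above: a distillation loss that pulls the student's post-collaboration feature maps toward the teacher's, and matrix-valued edge weights that rescale a neighbor's features per spatial location. The shapes and the MSE choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats):
    """Match per-agent (C, H, W) feature maps after collaboration."""
    return sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_feats, teacher_feats))

def weighted_aggregate(own_feat, neighbor_feats, edge_weights):
    """own_feat: (C, H, W); neighbor_feats: list of (C, H, W) maps;
    edge_weights: list of (H, W) per-location attention maps in [0, 1]."""
    out = own_feat.clone()
    for feat, w in zip(neighbor_feats, edge_weights):
        out = out + feat * w.unsqueeze(0)  # highlight informative regions
    return out
```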
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with gains of +4.2%/+2.4%/+1.4%, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
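A minimal sketch of token-relation distillation as described in finding 1) above: rather than matching features or the CLS token, match the pairwise token-affinity maps of teacher and student. The temperature and KL formulation are common choices assumed here, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def token_relation(tokens, temperature=1.0):
    """tokens: (B, N, D). Returns row-softmaxed (B, N, N) affinities."""
    d = tokens.shape[-1]
    sim = tokens @ tokens.transpose(1, 2) / (d ** 0.5)
    return F.softmax(sim / temperature, dim=-1)

def relation_distill_loss(student_tokens, teacher_tokens):
    s_rel = token_relation(student_tokens)
    t_rel = token_relation(teacher_tokens).detach()
    # KL divergence between teacher and student relation maps.
    return F.kl_div(s_rel.clamp_min(1e-8).log(), t_rel, reduction="batchmean")
```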
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from a few sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to object pose and point permutation, and it generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
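A simplified PyTorch sketch of an Offset-Attention layer in the spirit of point cloud transformers: self-attention whose output is the offset between the input features and the attended features, followed by a residual refinement. The layer sizes and projection are illustrative assumptions, not the 3DSGrasp implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):  # x: (B, N, D) per-point features
        attn = F.softmax(self.q(x) @ self.k(x).transpose(1, 2)
                         / x.shape[-1] ** 0.5, dim=-1)
        attended = attn @ self.v(x)
        offset = x - attended              # the "offset" that names the layer
        return x + self.proj(offset)       # residual refinement
```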
This paper presents a practical global optimization algorithm for the K-center clustering problem, which aims to select K samples as the cluster centers to minimize the maximum within-cluster distance. The algorithm is based on a reduced-space branch-and-bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the regions of centers. To improve efficiency, we have designed a two-stage decomposable lower bound whose solution can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bounds tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets demonstrate that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective function by 25.8% on average across all the synthetic and real-world datasets.
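The paper's branch-and-bound algorithm is involved; below is only the K-center objective it optimizes, plus the classic greedy farthest-point heuristic that such schemes typically use to initialize an upper bound. This is a baseline sketch, not the authors' global method.

```python
import numpy as np

def kcenter_objective(X, centers):
    """Maximum distance from any sample to its nearest center."""
    d = np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2)
    return d.min(axis=1).max()

def greedy_kcenter(X, k, seed=0):
    """2-approximation: repeatedly pick the point farthest from all centers."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        centers.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(X - X[centers[-1]], axis=1))
    return centers
```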
Score-based diffusion models have captured widespread attention and fueled fast progress in recent vision generative tasks. In this paper, we focus on the diffusion model backbone, which has been much neglected before. We systematically explore vision Transformers as diffusion learners for various generative tasks. With our improvements, the performance of a vanilla ViT-based backbone (IU-ViT) is boosted to be on par with that of traditional U-Net-based methods. We further provide a hypothesis on the implications of disentangling the generative backbone into an encoder-decoder structure and show proof-of-concept experiments verifying the effectiveness of a stronger encoder for generative tasks, using an ASymmetriC ENcoder-Decoder (ASCEND). Our improvements achieve competitive results on CIFAR-10, CelebA, LSUN, CUB Bird, and large-resolution text-to-image tasks. To the best of our knowledge, we are the first to successfully train a single diffusion model on a text-to-image task beyond 64x64 resolution. We hope this will motivate people to rethink the modeling choices and training pipelines for diffusion-based generative models.
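A generic sketch of the denoising training step into which the backbone choice discussed above (U-Net vs. ViT) plugs: model is any network mapping a noisy image and timestep to predicted noise. The linear beta schedule and noise-prediction objective are standard DDPM choices assumed for illustration, not this paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def diffusion_training_loss(model, x0, T=1000):
    """One DDPM-style step; model(x_t, t) predicts the injected noise."""
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1).to(x0.device)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise  # forward noising
    return F.mse_loss(model(x_t, t), noise)           # predict the noise
```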
Given an untrimmed video and a natural language query, video sentence grounding aims to localize the target temporal moment in the video. Existing methods mainly tackle this task by matching and aligning the semantics of the descriptive sentence and video segments at a single temporal resolution, neglecting the temporal consistency of video content across resolutions. In this work, we propose a novel multi-resolution temporal video sentence grounding network, MRTNet, which consists of a multi-modal feature encoder, a Multi-Resolution Temporal (MRT) module, and a predictor module. The MRT module is an encoder-decoder network whose decoder features are combined with Transformers to predict the final start and end timestamps. Notably, our MRT module is hot-pluggable, meaning it can be seamlessly incorporated into any anchor-free model. In addition, we use a hybrid loss to supervise cross-modal features in the MRT module for more accurate grounding at three scales: frame-level, clip-level, and sequence-level. Extensive experiments on three prevalent datasets demonstrate the effectiveness of MRTNet.
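A speculative sketch of the three-scale hybrid supervision described above; the per-scale loss functions and their equal weighting are assumptions, since the abstract only names the scales.

```python
import torch
import torch.nn.functional as F

def hybrid_grounding_loss(frame_logits, frame_labels,
                          clip_logits, clip_labels,
                          seq_pred, seq_target):
    """Combine frame-, clip-, and sequence-level supervision."""
    frame_loss = F.binary_cross_entropy_with_logits(frame_logits, frame_labels)
    clip_loss = F.binary_cross_entropy_with_logits(clip_logits, clip_labels)
    seq_loss = F.l1_loss(seq_pred, seq_target)  # start/end timestamp regression
    return frame_loss + clip_loss + seq_loss
```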